40 research outputs found

    Discussion of "Statistical Modeling of Spatial Extremes" by A. C. Davison, S. A. Padoan and M. Ribatet

    Discussion of "Statistical Modeling of Spatial Extremes" by A. C. Davison, S. A. Padoan and M. Ribatet [arXiv:1208.3378].Comment: Published in at http://dx.doi.org/10.1214/12-STS376B the Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org

    A spatio-temporal model for Red Sea surface temperature anomalies

    This paper details the approach of team Lancaster to the 2019 EVA data challenge, dealing with spatio-temporal modelling of Red Sea surface temperature anomalies. We model the marginal distributions and dependence features separately; for the former, we use a combination of Gaussian and generalised Pareto distributions, while the dependence is captured using a localised Gaussian process approach. We also propose a space-time moving estimate of the cumulative distribution function that takes into account spatial variation and temporal trend in the anomalies, to be used in those regions with limited available data. The team's predictions are compared to results obtained via an empirical benchmark. Our approach performs well in terms of the threshold-weighted continuous ranked probability score criterion, chosen by the challenge organiser.
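
    The marginal construction described above can be sketched in code: below a high threshold the anomalies follow a Gaussian (bulk) model, and threshold excesses follow a generalised Pareto distribution. The snippet below is a minimal illustration of that splicing using scipy; the threshold level and the `anomalies` array are placeholders rather than the paper's actual choices.

```python
import numpy as np
from scipy import stats

def fit_spliced_marginal(anomalies, threshold_quantile=0.95):
    """Fit a Gaussian bulk / generalised Pareto tail marginal model.

    Returns a CDF function F(x). A simplified sketch of the Gaussian + GPD
    marginal construction described in the abstract, not the paper's exact
    specification.
    """
    u = np.quantile(anomalies, threshold_quantile)       # high threshold
    mu, sigma = stats.norm.fit(anomalies)                 # Gaussian bulk fit
    exceed = anomalies[anomalies > u] - u                 # threshold excesses
    xi, _, beta = stats.genpareto.fit(exceed, floc=0)     # GPD tail fit
    p_u = np.mean(anomalies <= u)                         # prob. of not exceeding u

    def cdf(x):
        x = np.asarray(x, dtype=float)
        # rescale the Gaussian bulk so it meets the tail continuously at u
        below = stats.norm.cdf(x, mu, sigma) * (p_u / stats.norm.cdf(u, mu, sigma))
        above = p_u + (1 - p_u) * stats.genpareto.cdf(x - u, xi, loc=0, scale=beta)
        return np.where(x <= u, below, above)

    return cdf

# Example with synthetic anomalies (placeholder for the SST anomaly series)
anomalies = np.random.default_rng(0).normal(0, 1, 5000)
F = fit_spliced_marginal(anomalies)
print(F([0.0, 2.5, 4.0]))
```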

    Improving statistical models for flood risk assessment

    Widespread flooding, such as the events in the winter of 2013/2014 in the UK and early summer 2013 in Central Europe, demonstrates clearly how important it is to understand the characteristics of floods in which multiple locations experience extreme river flows. Recent developments in multivariate statistical modelling help to place such events in a probabilistic framework. It is now possible to perform joint probability analysis of events defined in terms of physical variables at hundreds of locations simultaneously, over multiple variables (including river flows, rainfall and sea levels), combined with analysis of temporal dependence to capture the evolution of events over a large domain. Critical constraints on such data-driven methods are the problems of missing data, especially where records over a network are not all concurrent, the joint analysis of several different physical variables, and the choice of suitable time scales when combining information from those variables. This paper presents new developments of a high-dimensional conditional probability model for extreme river flow events conditioned on flow and rainfall observations. These are: a new computationally efficient parametric approach to account for missing data in the joint analysis of extremes over a large hydrometric network; a robust approach for the spatial interpolation of extreme events throughout a large river network; generation of realistic estimates of extremes at ungauged locations; and exploiting rainfall information rationally within the statistical model to help improve efficiency. These methodological advances will be illustrated with data from the UK river network and recent events to show how they contribute to a flexible and effective framework for flood risk assessment, with applications in the insurance sector and for national-scale emergency planning.
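
    As a point of contrast with the model-based developments listed above, the simplest data-driven estimate of widespread-flooding probabilities is an empirical joint-exceedance count over the gauge network, and it runs straight into the missing-data problem the paper addresses: with non-concurrent records, only jointly observed days can be used. The sketch below shows that naive baseline; the gauge data and thresholds are hypothetical.

```python
import numpy as np

def empirical_joint_exceedance(flows, thresholds):
    """Empirical probability that a pair of gauges exceed their thresholds
    on the same day, using only days where both records are present.

    flows      : (n_days, n_sites) array with NaN for missing observations
    thresholds : (n_sites,) per-gauge thresholds (e.g. high quantiles)

    A naive baseline: with non-concurrent records the usable sample shrinks
    rapidly, which motivates the parametric missing-data approach described
    in the abstract.
    """
    n_sites = flows.shape[1]
    p_joint = np.full((n_sites, n_sites), np.nan)
    for i in range(n_sites):
        for j in range(n_sites):
            both = ~np.isnan(flows[:, i]) & ~np.isnan(flows[:, j])
            if both.sum() > 0:
                p_joint[i, j] = np.mean(
                    (flows[both, i] > thresholds[i]) & (flows[both, j] > thresholds[j])
                )
    return p_joint

# Hypothetical example: 3 gauges, 1000 days, with gaps in the records
rng = np.random.default_rng(1)
flows = rng.gamma(2.0, 10.0, size=(1000, 3))
flows[rng.random(flows.shape) < 0.2] = np.nan            # ~20% missing
thresholds = np.nanquantile(flows, 0.95, axis=0)
print(empirical_joint_exceedance(flows, thresholds))
```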

    Why extreme floods are more common than you might think

    Floods in England and Wales have the potential to cause billions of pounds of damage. You might think such extreme events are rare, but they are likely to occur more frequently than expected.
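
    A back-of-the-envelope calculation illustrates the point. Suppose, purely for illustration, that there are 100 catchments and that each independently has a 1-in-100 chance of an extreme flood in any given year; the chance that at least one of them floods somewhere in the country that year is then well over a half.

```python
# Chance of at least one "1-in-100-year" flood somewhere in a given year,
# assuming (for illustration only) 100 independent catchments.
n_catchments = 100
p_annual = 0.01                      # annual exceedance probability per catchment
p_somewhere = 1 - (1 - p_annual) ** n_catchments
print(round(p_somewhere, 3))         # ~0.634
```

    In reality, spatial dependence between catchments reduces this figure, but the qualitative conclusion survives: viewed nationally, "rare" floods are not rare.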

    Modelling the clustering of extreme events for short-term risk assessment

    Reliable estimates of the occurrence rates of extreme events are highly important for insurance companies, government agencies and the general public. The rarity of an extreme event is typically expressed through its return period, i.e. the expected waiting time between events of the observed size if the extreme events of the process are independent and identically distributed. A major limitation of this measure becomes apparent when an unexpectedly high number of events occurs within the few months immediately after a T-year event, with T large. Such instances undermine trust in the quality of risk estimates. The clustering of apparently independent extreme events can occur as a result of local non-stationarity of the process, which can be explained by covariates or random effects. We show how accounting for these covariates and random effects provides more accurate estimates of return levels and aids short-term risk assessment through the use of a complementary new risk measure. Supplementary materials accompanying this paper appear online.
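
    To make the role of local non-stationarity concrete, the toy simulation below compares exceedance counts from an i.i.d. annual process with those from a process whose annual rate carries a latent random effect; the latter produces the apparent clustering of "T-year" events described above. The rates and the random-effect scale are illustrative and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n_years, base_rate = 1000, 0.01          # nominal 100-year event rate

# (a) i.i.d. exceedances: one Bernoulli trial per year
iid_events = rng.random(n_years) < base_rate

# (b) a latent yearly random effect multiplies the rate (mean kept at 1),
#     so some years are systematically more event-prone (local non-stationarity)
random_effect = rng.lognormal(mean=-0.5, sigma=1.0, size=n_years)
rate_t = np.clip(base_rate * random_effect, 0, 1)
cluster_events = rng.random(n_years) < rate_t

def max_events_in_window(events, window=5):
    """Largest number of events in any run of `window` consecutive years."""
    counts = np.convolve(events.astype(int), np.ones(window, dtype=int), "valid")
    return counts.max()

print("iid:      ", iid_events.sum(), "events, worst 5-yr window:",
      max_events_in_window(iid_events))
print("clustered:", cluster_events.sum(), "events, worst 5-yr window:",
      max_events_in_window(cluster_events))
```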

    Modelling spatial extreme events with environmental applications

    Spatial extreme value analysis has been an area of rapid growth in the last decade. The focus has been on modelling the spatial componentwise maxima by max-stable processes. Here, we will explain the limitations of these modelling approaches and show how spatial models can be developed that overcome these deficiencies by exploiting the flexible conditional multivariate extremes models of Heffernan and Tawn (2004). We illustrate the benefits of these new spatial models through applications to North Sea wave analysis and to widespread UK river flood risk analysis.
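
    For reference, the core of the Heffernan and Tawn (2004) conditional formulation that these spatial models build on can be written as follows (for variables transformed to common, e.g. Laplace, margins); this is the standard form from the literature rather than the paper's full spatial specification.

```latex
% Heffernan--Tawn (2004) conditional extremes model: condition on site i being
% large and describe the remaining sites Y_{-i} through a componentwise
% location-scale regression with a nonparametrically specified residual Z.
\[
  \mathbf{Y}_{-i} \mid \{Y_i = y\}
    \;\approx\; \boldsymbol{\alpha}\, y \;+\; y^{\boldsymbol{\beta}} \mathbf{Z},
  \qquad y > u,
\]
% where, on Laplace margins, \alpha_j \in [-1, 1] and \beta_j < 1 for each
% component j, u is a high threshold, and all operations are componentwise.
```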

    Model-based inference of conditional extreme value distributions with hydrological applications

    Multivariate extreme value models are used to estimate joint risk in a number of applications, with a particular focus on environmental fields ranging from climatology and hydrology to oceanography and seismic hazards. The semi-parametric conditional extreme value model of Heffernan and Tawn, involving a multivariate regression, provides the most suitable of current statistical models in terms of its flexibility to handle a range of extremal dependence classes. However, the standard inference for the joint distribution of the residuals of this model suffers from the curse of dimensionality because, in a d-dimensional application, it involves a (d−1)-dimensional nonparametric density estimator, which requires, for accuracy, a number of points and commensurate effort that is exponential in d. Furthermore, it does not allow for any partially missing observations to be included, and a previous proposal to address this is extremely computationally intensive, making its use prohibitive if the proportion of missing data is nontrivial. We propose to replace the (d−1)-dimensional nonparametric density estimator with a model-based copula with univariate marginal densities estimated using kernel methods. This approach provides statistically and computationally efficient estimates whatever the dimension, d, or the degree of missing data. Evidence is presented to show that the benefits of this approach substantially outweigh potential misspecification errors. The methods are illustrated through the analysis of UK river flow data at a network of 46 sites and an assessment of the rarity of the 2015 floods in North West England.
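
    A minimal sketch of the kind of substitution described above, replacing the high-dimensional nonparametric residual density with a parametric copula whose univariate margins are kernel-estimated, is given below using a Gaussian copula; the paper's actual copula family and fitting details may well differ.

```python
import numpy as np
from scipy import stats

def fit_gaussian_copula_kde(residuals):
    """Fit a Gaussian copula with kernel-smoothed univariate margins to an
    (n, d) residual matrix Z from a conditional extremes fit.

    A sketch only: margins via Gaussian KDE, dependence via the correlation
    of the normal scores. Returns a sampler for new residual vectors.
    """
    n, d = residuals.shape
    kdes = [stats.gaussian_kde(residuals[:, j]) for j in range(d)]

    def marg_cdf(j, x):
        return kdes[j].integrate_box_1d(-np.inf, x)

    # Transform each margin to normal scores and estimate the copula correlation
    u = np.array([[marg_cdf(j, residuals[i, j]) for j in range(d)] for i in range(n)])
    u = np.clip(u, 1e-6, 1 - 1e-6)
    z = stats.norm.ppf(u)
    corr = np.corrcoef(z, rowvar=False)

    # Grids for inverting the marginal CDFs when simulating
    grids = [np.linspace(residuals[:, j].min() - 1, residuals[:, j].max() + 1, 512)
             for j in range(d)]
    grid_cdfs = [np.array([marg_cdf(j, x) for x in grids[j]]) for j in range(d)]

    def sample(size, rng=np.random.default_rng()):
        zs = rng.multivariate_normal(np.zeros(d), corr, size=size)
        us = stats.norm.cdf(zs)
        return np.column_stack([np.interp(us[:, j], grid_cdfs[j], grids[j])
                                for j in range(d)])

    return sample

# Hypothetical residuals from a 3-dimensional conditional extremes fit
rng = np.random.default_rng(0)
Z = rng.standard_t(df=5, size=(500, 3))
sampler = fit_gaussian_copula_kde(Z)
print(sampler(5).round(2))
```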

    Statistical downscaling for future extreme wave heights in the North Sea

    For safe offshore operations, accurate knowledge of extreme oceanographic conditions is required. We develop a multi-step statistical downscaling algorithm using data from a low-resolution global climate model (GCM) and local-scale hindcast data to make predictions of the extreme wave climate over the next 50-year period at locations in the North Sea. The GCM is unable to produce wave data accurately, so instead we use its 3-hourly wind speed and direction data. By exploiting the relationships between wind characteristics and wave heights, a downscaling approach is developed to relate the large-scale and local-scale data sets, so that future changes in wind characteristics can be translated into changes in extreme wave distributions. We assess the performance of the methods using within-sample testing and apply the method to derive future design levels over the northern North Sea.
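
    One simple way to realise the link described above is to regress hindcast wave heights on the GCM wind characteristics over a common historical period and then push future GCM winds through the fitted relationship before an extreme value analysis of the downscaled series. The sketch below uses an ordinary log-log regression on wind speed alone with synthetic data; the paper's algorithm is multi-step and also exploits wind direction.

```python
import numpy as np
from scipy import stats

# Hypothetical data: 3-hourly GCM wind speed (m/s) for a historical period,
# hindcast significant wave height Hs (m) at a North Sea location, and
# GCM wind speed for a future period.
rng = np.random.default_rng(42)
wind_hist = rng.weibull(2.0, 5000) * 8.0
hs_hindcast = 0.05 * wind_hist ** 1.6 + rng.normal(0, 0.3, wind_hist.size)
wind_future = rng.weibull(2.0, 5000) * 8.8           # slightly windier future climate

# Step 1: fit the large-scale/local-scale link on the overlapping period
# (here a log-log linear regression, i.e. a power law Hs ~ a * wind^b).
mask = (wind_hist > 0) & (hs_hindcast > 0)
slope, intercept, *_ = stats.linregress(np.log(wind_hist[mask]),
                                        np.log(hs_hindcast[mask]))

# Step 2: downscale future winds to pseudo wave heights
hs_future = np.exp(intercept) * wind_future ** slope

# Step 3: extreme value analysis of the downscaled series (GPD above a threshold)
u = np.quantile(hs_future, 0.95)
xi, _, beta = stats.genpareto.fit(hs_future[hs_future > u] - u, floc=0)
print(f"threshold u = {u:.2f} m, GPD shape = {xi:.2f}, scale = {beta:.2f}")
```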

    Models of everywhere revisited: a technological perspective

    The concept ‘models of everywhere’ was first introduced in the mid-2000s as a means of reasoning about the environmental science of a place, changing the nature of the underlying modelling process from one in which general model structures are used to one in which modelling becomes a learning process about specific places, in particular capturing the idiosyncrasies of that place. At one level, this is a straightforward concept, but at another it is a rich multi-dimensional conceptual framework involving the following key dimensions: models of everywhere, models of everything and models at all times, each constantly re-evaluated against the most current evidence. This is a compelling approach with the potential to deal with epistemic uncertainties and nonlinearities. However, the approach has not yet been fully utilised or explored. This paper examines the concept of models of everywhere in the light of recent advances in technology. The paper argues that, when the concept was first proposed, technology was a limiting factor, but that advances in areas such as the Internet of Things, cloud computing and data analytics have now removed many of the barriers. Consequently, it is timely to look again at the concept of models of everywhere under practical conditions as part of a trans-disciplinary effort to tackle the remaining research questions. The paper concludes by identifying the key elements of a research agenda that should underpin such experimentation and deployment.

    Rethinking data‐driven decision support in flood risk management for a big data age

    Decision‐making in flood risk management is increasingly dependent on access to data, with the availability of data increasing dramatically in recent years. We are therefore moving towards an era of big data, with the added challenge that, in this area, data sources are highly heterogeneous, exist at a variety of scales, and include a mix of structured and unstructured data. The key requirement is therefore one of integration and subsequent analysis of this complex web of data. This paper examines the potential of a data‐driven approach to support decision‐making in flood risk management, with the goal of investigating a suitable software architecture and associated set of techniques to support a more data‐centric approach. The key contribution of the paper is a cloud‐based data hypercube that achieves the desired level of integration of highly complex data. This hypercube builds on innovations in cloud services for data storage, semantic enrichment and querying, and also features the use of notebook technologies to support open and collaborative scenario analyses in support of decision making. The paper also highlights the success of our agile methodology in weaving together cross‐disciplinary perspectives and in engaging a wide range of stakeholders in exploring possible technological futures for flood risk management.
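
    The phrase "data hypercube" suggests multi-dimensional, indexed access to observations across time, site and variable that can be sliced along any axis. The snippet below is a generic, local analogue of that access pattern using xarray with hypothetical gauge names; it is not the paper's cloud-based implementation.

```python
import numpy as np
import pandas as pd
import xarray as xr

# A small local stand-in for hypercube-style access: a 3-D array indexed by
# time, site and variable that can be sliced along any dimension.
times = pd.date_range("2015-12-01", periods=31, freq="D")
sites = ["gauge_A", "gauge_B", "gauge_C"]            # hypothetical gauge names
variables = ["river_flow", "rainfall", "sea_level"]  # illustrative variables

rng = np.random.default_rng(3)
cube = xr.DataArray(
    rng.gamma(2.0, 5.0, size=(len(times), len(sites), len(variables))),
    coords={"time": times, "site": sites, "variable": variables},
    dims=("time", "site", "variable"),
    name="observations",
)

# Example query: December 2015 rainfall and river flow at one gauge
subset = cube.sel(site="gauge_A", variable=["rainfall", "river_flow"])
print(subset.max(dim="time").values)                 # per-variable monthly maxima
```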